Embeddings of Decomposition Spaces into Sobolev and BV Spaces
In the present paper, we investigate whether an embedding of a decomposition
space $\mathcal{D}(\mathcal{Q}, L^p, Y)$ into a given Sobolev space $W^{k,q}(\mathbb{R}^d)$
exists. As special cases, this includes embeddings
into Sobolev spaces of (homogeneous and inhomogeneous) Besov spaces,
$(\alpha)$-modulation spaces, shearlet smoothness spaces and also of a large
class of wavelet coorbit spaces, in particular of shearlet-type coorbit spaces.
Precisely, we will show that under extremely mild assumptions on the covering
$\mathcal{Q} = (Q_i)_{i \in I}$, we have $\mathcal{D}(\mathcal{Q}, L^p, Y) \hookrightarrow W^{k,q}(\mathbb{R}^d)$
as soon as $p \leq q$ and $Y \hookrightarrow \ell_w^{q^\triangledown}(I)$
hold. Here, $q^\triangledown = \min\{q, q'\}$, and the weight $w = (w_i)_{i \in I}$
can be easily computed, based only on the covering
$\mathcal{Q}$ and on the parameters $p, q, k$.
Conversely, a necessary condition for existence of the embedding is that
$p \leq q$ and $Y \cap \ell_0(I) \hookrightarrow \ell_w^{q^\triangledown}(I)$
hold, where $\ell_0(I)$ denotes the space of finitely supported
sequences on $I$.
All in all, for the range $q \in (0, 2] \cup \{\infty\}$, we obtain a complete
characterization of existence of the embedding in terms of readily verifiable
criteria. We can also completely characterize existence of an embedding of a
decomposition space into a BV space.
Optimal approximation of piecewise smooth functions using deep ReLU neural networks
We study the necessary and sufficient complexity of ReLU neural networks---in
terms of depth and number of weights---which is required for approximating
classifier functions in $L^2$. As a model class, we consider the set
$\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$
functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions
of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$,
regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial
neural networks with ReLU activation function that approximate functions from
$\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The
constructed networks have a fixed number of layers, depending only on $d$ and
$\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights,
which we prove to be optimal. In addition to the optimality in terms of the
number of weights, we show that in order to achieve the optimal approximation
rate, one needs ReLU networks of a certain depth. Precisely, for piecewise
$C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given---up to a
multiplicative constant---by $\beta/d$. Up to a log factor, our constructed
networks match this bound. This partly explains the benefits of depth for ReLU
networks by showing that deep networks are necessary to achieve efficient
approximation of (piecewise) smooth functions. Finally, we analyze
approximation in high-dimensional spaces where the function $f$ to be
approximated can be factorized into a smooth dimension-reducing feature map
$\tau$ and a classifier function $g$---defined on a low-dimensional feature
space---as $f = g \circ \tau$. We show that in this case the approximation rate
depends only on the dimension of the feature space and not on the input dimension.
Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$
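The complexity bounds above are concrete enough to tabulate. The following is a minimal sketch (not from the paper; all constants are set to 1, since the abstract only specifies asymptotic orders) of how the weight count $O(\varepsilon^{-2(d-1)/\beta})$ and the minimal depth $\sim \beta/d$ scale with the parameters:
```python
# Hypothetical illustration: scaling of the bounds stated in the abstract.
# Constants are set to 1; only the asymptotic order is meaningful.

def weight_bound(eps: float, d: int, beta: float) -> float:
    """Order of the number of nonzero weights needed for L2 error eps."""
    return eps ** (-2 * (d - 1) / beta)

def minimal_depth(d: int, beta: float) -> float:
    """Minimal depth for the optimal rate, up to a multiplicative constant."""
    return beta / d

if __name__ == "__main__":
    for eps in (1e-1, 1e-2, 1e-3):
        # Higher accuracy or higher dimension d inflates the weight count;
        # more smoothness beta reduces it but raises the required depth.
        print(f"eps={eps:.0e}  weights~{weight_bound(eps, d=2, beta=2.0):.1e}"
              f"  depth~{minimal_depth(2, 2.0):.1f}")
```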
Wavelet Coorbit Spaces viewed as Decomposition Spaces
In this paper we show that the Fourier transform induces an isomorphism
between the coorbit spaces, defined by Feichtinger and Gr\"ochenig, of the mixed,
weighted Lebesgue spaces $L_v^{p,q}$ with respect to the quasi-regular
representation of a semi-direct product $\mathbb{R}^d \rtimes H$ with suitably
chosen dilation group $H$, and certain decomposition spaces
$\mathcal{D}(\mathcal{Q}, L^p, \ell_u^q)$ (essentially as
introduced by Feichtinger and Gr\"obner), where the localized ``parts'' of a
function are measured in the $\mathcal{F}L^p$-norm.
This equivalence is useful in several ways: It provides access to a
Fourier-analytic understanding of wavelet coorbit spaces, and it allows one to
discuss coorbit spaces associated to different dilation groups in a common
framework. As an illustration of these points, we include a short discussion of
dilation invariance properties of coorbit spaces associated to different types
of dilation groups.
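Schematically, the result can be stated as follows (a paraphrase with reconstructed notation, not a verbatim display from the paper):
% The Fourier transform maps the wavelet coorbit space of L_v^{p,q}
% isomorphically onto a decomposition space over an induced covering Q:
\[
  \mathcal{F} \colon \operatorname{Co}\bigl(L_v^{p,q}\bigr)
  \xrightarrow{\ \cong\ }
  \mathcal{D}\bigl(\mathcal{Q}, L^p, \ell_u^q\bigr).
\]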
Approximation in $L^p(\mu)$ with deep ReLU neural networks
We discuss the expressive power of neural networks which use the non-smooth
ReLU activation function $\varrho(x) = \max\{0, x\}$ by analyzing the
approximation theoretic properties of such networks. The existing results
mainly fall into two categories: approximation using ReLU networks with a fixed
depth, or using ReLU networks whose depth increases with the approximation
accuracy. After reviewing these findings, we show that the results concerning
networks with fixed depth---which up to now only consider approximation in
$L^p(\lambda)$ for the Lebesgue measure $\lambda$---can be generalized to
approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular,
the generalized results apply in the usual setting of statistical learning
theory, where one is interested in approximation in $L^p(\mu)$, with the
probability measure $\mu$ describing the distribution of the data.
Comment: Accepted for presentation at SampTA 2019
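To make the measure-dependent error concrete, here is a minimal sketch (an illustration under our own assumptions, not code from the paper) that estimates the $L^p(\mu)$ approximation error of a small ReLU network by sampling from $\mu$:
```python
# Hypothetical illustration: empirical L^p(mu) approximation error,
# with mu a probability measure as in statistical learning theory.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def net(x):
    # Toy fixed-depth ReLU network: |x| = relu(x) + relu(-x) exactly,
    # so the empirical error below is (numerically) zero.
    return relu(x) + relu(-x)

target = np.abs                      # function to approximate
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)         # samples from mu = N(0, 1)
p = 2
err = np.mean(np.abs(target(x) - net(x)) ** p) ** (1 / p)
print(f"empirical L^{p}(mu) error: {err:.2e}")
```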
Design and properties of wave packet smoothness spaces
We introduce a family of quasi-Banach spaces - which we call wave packet
smoothness spaces - that includes those function spaces which can be
characterised by the sparsity of their expansions in Gabor frames, wave atoms,
and many other frame constructions. We construct Banach frames for and atomic
decompositions of the wave packet smoothness spaces and study their embeddings
in each other and in a few more classical function spaces such as Besov and
Sobolev spaces.
Comment: accepted for publication in Journal de Math\'ematiques Pures et Appliqu\'ees
From Frazier-Jawerth characterizations of Besov spaces to Wavelets and Decomposition spaces
This article describes how the ideas promoted by the fundamental papers
published by M. Frazier and B. Jawerth in the eighties have influenced
subsequent developments related to the theory of atomic decompositions and
Banach frames for function spaces such as the modulation spaces and
Besov-Triebel-Lizorkin spaces.
Both of these classes of spaces arise as special cases of two different,
general constructions of function spaces: coorbit spaces and decomposition
spaces. Coorbit spaces are defined by imposing certain decay conditions on the
so-called voice transform of the function/distribution under consideration. As
a concrete example, one might think of the wavelet transform, leading to the
theory of Besov-Triebel-Lizorkin spaces.
Decomposition spaces, on the other hand, are defined using certain
decompositions in the Fourier domain. For Besov-Triebel-Lizorkin spaces, one
uses a dyadic decomposition, while a uniform decomposition yields modulation
spaces.
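For concreteness, the two coverings can be written as follows (a standard sketch using common conventions, not necessarily the paper's exact normalizations):
% Dyadic covering of the frequency domain: Besov-Triebel-Lizorkin spaces.
\[
  Q_j = \bigl\{ \xi \in \mathbb{R}^d : 2^{j-1} \le |\xi| \le 2^{j+1} \bigr\},
  \quad j \ge 1,
\]
% Uniform covering: modulation spaces.
\[
  Q_k = k + [-1, 1]^d, \quad k \in \mathbb{Z}^d.
\]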
Only recently, the second author has established a fruitful connection
between modern variants of wavelet theory with respect to general dilation
groups (which can be treated in the context of coorbit theory) and a particular
family of decomposition spaces. In this way, optimal inclusion results and
invariance properties for a variety of smoothness spaces can be established. We
will present an outline of these connections and comment on the basic results
arising in this context.
The universal approximation theorem for complex-valued neural networks
We generalize the classical universal approximation theorem for neural
networks to the case of complex-valued neural networks. Precisely, we consider
feedforward networks with a complex activation function $\sigma : \mathbb{C} \to \mathbb{C}$
in which each neuron performs the operation
$\mathbb{C}^N \to \mathbb{C}, z \mapsto \sigma(b + w^T z)$ with weights $w \in \mathbb{C}^N$ and
a bias $b \in \mathbb{C}$, and with $\sigma$ applied componentwise. We
completely characterize those activation functions $\sigma$ for which the
associated complex networks have the universal approximation property, meaning
that they can uniformly approximate any continuous function on any compact
subset of $\mathbb{C}^d$ arbitrarily well.
Unlike the classical case of real networks, the set of "good activation
functions" which give rise to networks with the universal approximation
property differs significantly depending on whether one considers deep networks
or shallow networks: For deep networks with at least two hidden layers, the
universal approximation property holds as long as $\sigma$ is neither a
polynomial, nor a holomorphic function, nor an antiholomorphic function. Shallow
networks, on the other hand, are universal if and only if the real part or the
imaginary part of $\sigma$ is not a polyharmonic function.
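As a concrete illustration of the neuron operation $z \mapsto \sigma(b + w^T z)$, here is a minimal sketch (our own toy example, not from the paper) of a shallow complex-valued network; the chosen "split" ReLU activation has a non-smooth, hence non-polyharmonic, real part, so by the characterization above shallow networks built from it are universal:
```python
# Hypothetical illustration: a shallow complex-valued network whose
# neurons compute z -> sigma(b + w^T z), with the "split" ReLU
# sigma(z) = relu(Re z) + i * relu(Im z). Re(sigma) is not smooth,
# hence not polyharmonic, so such shallow networks are universal.
import numpy as np

def sigma(z):
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def shallow_net(z, W, b, a, c):
    # One hidden layer of M complex neurons, then a complex-linear readout.
    return a @ sigma(b + W @ z) + c

rng = np.random.default_rng(0)
N, M = 3, 5                                     # input dim, hidden width
z = rng.normal(size=N) + 1j * rng.normal(size=N)
W = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
b = rng.normal(size=M) + 1j * rng.normal(size=M)
a = rng.normal(size=M) + 1j * rng.normal(size=M)
print(shallow_net(z, W, b, a, c=0.0))
```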